
Digital Breast Tomosynthesis



Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses

Neural Information Processing Systems

Precise mass location and extent (e.g., mass boundaries) are typically not available in the patient's records, and it is burdensome, error-prone, and sometimes impossible to …


T-SYNTH: A Knowledge-Based Dataset of Synthetic Breast Images

Wiedeman, Christopher, Sarmakeeva, Anastasiia, Sizikova, Elena, Filienko, Daniil, Lago, Miguel, Delfino, Jana G., Badano, Aldo

arXiv.org Artificial Intelligence

Responsible for approximately two million new cases and over six hundred thousand deaths in 2022 alone (Sung et al., 2021), breast cancer remains a prominent global health concern, and is expected to account for nearly one-third of all newly diagnosed cancers among women in the United States (DeSantis et al., 2016). According to the most recent report from the International Agency for Research on Cancer (Bray et al., 2024), it is one of the most widespread cancers diagnosed worldwide, both in the number of cases and in associated deaths. Consequently, medical imaging techniques are indispensable for screening, diagnosis, and further research into the disease. Historically, the most common imaging technique for breast cancer screening has been digital mammography (DM), in which a 2D x-ray projection of a compressed breast is taken. Digital breast tomosynthesis (DBT), a pseudo-3D imaging technique, has been increasingly adopted, demonstrating improved screening performance (Asbeutah et al., 2019; Sprague et al., 2023).


CoMoTo: Unpaired Cross-Modal Lesion Distillation Improves Breast Lesion Detection in Tomosynthesis

Alberb, Muhammad, Elbatel, Marawan, Elgebaly, Aya, Montoya-del-Angel, Ricardo, Li, Xiaomeng, Martí, Robert

arXiv.org Artificial Intelligence

Digital Breast Tomosynthesis (DBT) is an advanced breast imaging modality that offers superior lesion detection accuracy compared to conventional mammography, albeit at the trade-off of longer reading time. Accelerating lesion detection from DBT using deep learning is hindered by limited data availability and huge annotation costs. A possible solution to this issue could be to leverage the information provided by a more widely available modality, such as mammography, to enhance DBT lesion detection. In this paper, we present a novel framework, CoMoTo, for improving lesion detection in DBT. Our framework leverages unpaired mammography data to enhance the training of a DBT model, improving practicality by eliminating the need for mammography during inference. Specifically, we propose two novel components, Lesion-specific Knowledge Distillation (LsKD) and Intra-modal Point Alignment (ImPA). LsKD selectively distills lesion features from a mammography teacher model to a DBT student model, disregarding background features. ImPA further enriches LsKD by ensuring the alignment of lesion features within the teacher before distilling knowledge to the student. Our comprehensive evaluation shows that CoMoTo is superior to traditional pretraining and image-level KD, improving performance by 7% in Mean Sensitivity under a low-data setting. Our code is available at https://github.com/Muhammad-Al-Barbary/CoMoTo .
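The core idea behind lesion-specific distillation, as described in the abstract, is to penalize teacher-student feature disagreement only inside lesion regions while ignoring background. A minimal NumPy sketch of such a masked distillation loss (the function name and tensor shapes are illustrative assumptions, not the authors' implementation — see their repository for the real one):

```python
import numpy as np

def lesion_specific_kd_loss(student_feats, teacher_feats, lesion_mask):
    """Masked feature-distillation loss.

    student_feats, teacher_feats: arrays of shape (B, C, H, W)
    lesion_mask: binary array of shape (B, 1, H, W), 1 inside lesion regions

    Only feature differences inside the lesion mask contribute; background
    features are zeroed out before the squared-error comparison.
    """
    masked_student = student_feats * lesion_mask
    masked_teacher = teacher_feats * lesion_mask
    # Normalize by the number of lesion pixels so the loss scale does not
    # depend on lesion size (guard against empty masks).
    n = max(float(lesion_mask.sum()), 1.0)
    return float(((masked_student - masked_teacher) ** 2).sum() / n)
```

In a real training loop this term would be added to the student's detection loss, so gradients pull the student's lesion features toward the teacher's while leaving background features unconstrained.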


Study: AI Improves Cancer Detection Rate for Digital Mammography and Digital Breast Tomosynthesis

#artificialintelligence

The use of adjunctive artificial intelligence (AI) doubled the positive predictive value (PPV) of digital mammography (DM) exams overall and led to greater than 90 percent accuracy for DM and digital breast tomosynthesis (DBT) in detecting breast cancer in women with elevated risk, according to research findings presented recently at the European Congress of Radiology (ECR) conference in Vienna, Austria. For the study, researchers compared the use of adjunctive AI (Transpara version 1.7.0, ScreenPoint Medical) in 11,988 women (between the ages of 50 and 74) who had DM or DBT screening exams versus 16,555 women screened with DM or DBT the year before without AI support. In the AI group, 5,049 women had DM screening with the Hologic Selenia device and 6,949 women had DBT screening with the Hologic Selenia Dimensions device, according to the study. For the non-AI cohort, 7,229 women had DM screening and 9,326 women had DBT.
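The headline metric in this study, positive predictive value, is simply the fraction of positive (recalled) screening exams that turn out to be cancer. A one-line reminder of the computation (the counts below are made-up illustrative numbers, not figures from the study):

```python
def positive_predictive_value(true_positives, false_positives):
    """PPV = TP / (TP + FP): fraction of positive exams that are true cancers."""
    return true_positives / (true_positives + false_positives)

# Illustrative only: 9 cancers among 100 recalled exams gives a PPV of 9%.
ppv = positive_predictive_value(9, 91)
```

Doubling PPV, as reported here, therefore means the same number of detected cancers is reached with roughly half as many false-positive recalls.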


An efficient deep neural network to find small objects in large 3D images

Park, Jungkyu, Chłędowski, Jakub, Jastrzębski, Stanisław, Witowski, Jan, Xu, Yanqi, Du, Linda, Gaddam, Sushma, Kim, Eric, Lewin, Alana, Parikh, Ujas, Plaunova, Anastasia, Chen, Sardius, Millet, Alexandra, Park, James, Pysarenko, Kristine, Patel, Shalin, Goldberg, Julia, Wegener, Melanie, Moy, Linda, Heacock, Laura, Reig, Beatriu, Geras, Krzysztof J.

arXiv.org Artificial Intelligence

3D imaging enables accurate diagnosis by providing spatial information about organ anatomy. However, using 3D images to train AI models is computationally challenging because they consist of 10x or 100x more pixels than their 2D counterparts. To be trained with high-resolution 3D images, convolutional neural networks resort to downsampling them or projecting them to 2D. We propose an effective alternative, a neural network that enables efficient classification of full-resolution 3D medical images. Compared to off-the-shelf convolutional neural networks, our network, 3D Globally-Aware Multiple Instance Classifier (3D-GMIC), uses 77.98%-90.05% less GPU memory and 91.23%-96.02% less computation. While it is trained only with image-level labels, without segmentation labels, it explains its predictions by providing pixel-level saliency maps. On a dataset collected at NYU Langone Health, including 85,526 patients with full-field 2D mammography (FFDM), synthetic 2D mammography, and 3D mammography, 3D-GMIC achieves an AUC of 0.831 (95% CI: 0.769-0.887) in classifying breasts with malignant findings using 3D mammography. This is comparable to the performance of GMIC on FFDM (0.816, 95% CI: 0.737-0.878) and synthetic 2D (0.826, 95% CI: 0.754-0.884), which demonstrates that 3D-GMIC successfully classified large 3D images despite focusing computation on a smaller percentage of its input compared to GMIC. Therefore, 3D-GMIC identifies and utilizes extremely small regions of interest from 3D images consisting of hundreds of millions of pixels, dramatically reducing associated computational challenges. 3D-GMIC generalizes well to BCS-DBT, an external dataset from Duke University Hospital, achieving an AUC of 0.848 (95% CI: 0.798-0.896).
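The efficiency claim in the abstract rests on focusing computation on a few small regions of interest selected from a coarse, cheap-to-compute saliency map, rather than running a full-resolution network over hundreds of millions of pixels. A toy stand-in for that ROI-selection step (the function and shapes are assumptions for illustration, not the 3D-GMIC architecture itself):

```python
import numpy as np

def top_k_patch_centers(saliency, k):
    """Return (row, col) indices of the k highest-scoring locations in a
    coarse saliency map, mimicking how a globally-aware classifier picks a
    handful of patches to process at full resolution."""
    flat = saliency.ravel()
    # Indices of the k largest saliency values, highest first
    top = np.argsort(flat)[::-1][:k]
    return [tuple(int(v) for v in np.unravel_index(i, saliency.shape))
            for i in top]
```

Only the patches centered at these locations would then be cropped from the full-resolution volume and fed to the expensive local network, which is where the reported memory and compute savings come from.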


Detection of masses and architectural distortions in digital breast tomosynthesis: a publicly available dataset of 5,060 patients and a deep learning model

Buda, Mateusz, Saha, Ashirbani, Walsh, Ruth, Ghate, Sujata, Li, Nianyi, Święcicki, Albert, Lo, Joseph Y., Mazurowski, Maciej A.

arXiv.org Artificial Intelligence

Breast cancer screening is one of the most common radiological tasks, with over 39 million exams performed each year. While breast cancer screening has been one of the most studied medical imaging applications of artificial intelligence, the development and evaluation of algorithms are hindered by the lack of well-annotated, large-scale, publicly available datasets. This is particularly an issue for digital breast tomosynthesis (DBT), which is a relatively new breast cancer screening modality. We have curated and made publicly available a large-scale dataset of digital breast tomosynthesis images. It contains 22,032 reconstructed DBT volumes belonging to 5,610 studies from 5,060 patients. This included four groups: (1) 5,129 normal studies, (2) 280 studies where additional imaging was needed but no biopsy was performed, (3) 112 benign biopsied studies, and (4) 89 studies with cancer. Our dataset included masses and architectural distortions, which were annotated by two experienced radiologists. Additionally, we developed a single-phase deep learning detection model and tested it using our dataset to serve as a baseline for future research. Our model reached a sensitivity of 65% at 2 false positives per breast. Our large, diverse, and highly curated dataset will facilitate the development and evaluation of AI algorithms for breast cancer screening by providing data for training as well as a common set of cases for model validation. The performance of the model developed in our study shows that the task remains challenging and will serve as a baseline for future model development.
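The baseline figure quoted above ("65% sensitivity at 2 false positives per breast") is a single operating point on an FROC curve: sweep the detection-score threshold and report sensitivity at the lowest threshold where the false-positive rate per breast stays within budget. A self-contained sketch of that computation (names and the evaluation convention are illustrative assumptions, not the authors' exact protocol):

```python
def sensitivity_at_fp_rate(detections, n_lesions, n_breasts, fp_per_breast=2.0):
    """Sensitivity at a fixed false-positive budget, FROC-style.

    detections: list of (score, is_true_positive) pairs, one per detection
    n_lesions:  total number of ground-truth lesions
    n_breasts:  total number of breasts evaluated
    """
    # Walk thresholds from most to least confident detection
    ordered = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    sensitivity = 0.0
    for score, is_tp in ordered:
        if is_tp:
            tp += 1
        else:
            fp += 1
        # Record sensitivity at every threshold still within the FP budget
        if fp / n_breasts <= fp_per_breast:
            sensitivity = tp / n_lesions
    return sensitivity
```

Lowering the threshold admits more true detections but also more false positives, so the reported operating point is the best sensitivity achievable before the false-positive rate exceeds 2 per breast.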


Breast Cancer Screening – Digital Breast Tomosynthesis (BCS-DBT) - The Cancer Imaging Archive (TCIA) Public Access - Cancer Imaging Archive Wiki

#artificialintelligence

Breast cancer is among the most common cancers and a common cause of death among women. Over 39 million breast cancer screening exams are performed every year and are among the most common radiological tests. This creates a high need for accurate image interpretation. Machine learning has shown promise in interpretation of medical images. However, limited data for training and validation remains an issue.


New AI-driven technology for breast cancer screening

#artificialintelligence

This included the technology ProFound AI for Digital Breast Tomosynthesis (DBT), which is said to be the first artificial intelligence software for DBT to be approved by the U.S. Food and Drug Administration (FDA). Also on offer at the event were medical software solutions designed for 2D mammography and for assessing breast density. During the meeting, iCAD unveiled its vision for future technologies. This predictive aspect included technologies that should enable clinicians to more easily interpret patients' earlier images, along with prospective breast cancer risk assessment, to form a clearer picture of the specific patient's condition. Clinical data from a large reader study involving ProFound AI for DBT were recently published in the journal Radiology: Artificial Intelligence ("Improving Accuracy and Efficiency with Concurrent Use of Artificial Intelligence for Digital Breast Tomosynthesis").